Spiral of Silence in Large Language Model Agents

Zhong, Mingze, Fang, Meng, Shi, Zijing, Huang, Yuxuan, Zheng, Shunfeng, Du, Yali, Chen, Ling, Wang, Jun

arXiv.org Artificial Intelligence

The Spiral of Silence (SoS) theory holds that individuals with minority views often refrain from speaking out for fear of social isolation, enabling majority positions to dominate public discourse. When the 'agents' are large language models (LLMs), however, the classical psychological explanation is not directly applicable, since SoS was developed for human societies. This raises a central question: can SoS-like dynamics nevertheless emerge from purely statistical language generation in LLM collectives? We propose an evaluation framework for examining SoS in LLM agents. Specifically, we consider four controlled conditions that systematically vary the availability of 'History' and 'Persona' signals. Opinion dynamics are assessed using trend tests such as Mann-Kendall and Spearman's rank, along with concentration measures including kurtosis and interquartile range. Experiments across open-source and closed-source models show that history and persona together produce strong majority dominance and replicate SoS patterns; history signals alone induce strong anchoring; and persona signals alone foster diverse but uncorrelated opinions, indicating that without historical anchoring, SoS dynamics cannot emerge. The work bridges computational sociology and responsible AI design, highlighting the need to monitor and mitigate emergent conformity in LLM-agent systems.
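The trend and concentration statistics named in the abstract are standard and can be computed directly. As a minimal sketch (the `share` trajectory below is invented toy data, not the paper's results): the Mann-Kendall S statistic is implemented by hand under the no-ties assumption, while Spearman's rank correlation, excess kurtosis, and the interquartile range come from SciPy/NumPy.

```python
import numpy as np
from scipy.stats import spearmanr, kurtosis, norm

def mann_kendall(x):
    """Mann-Kendall trend test: S statistic and two-sided p-value (no ties)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S = sum of signs over all forward pairs (i < j)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0  # variance of S under H0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, p

# Toy trajectory: fraction of agents voicing the majority view per round
share = np.array([0.52, 0.55, 0.61, 0.60, 0.68, 0.74, 0.79, 0.85, 0.88, 0.90])

s, p = mann_kendall(share)                             # monotone upward trend?
rho, p_rho = spearmanr(np.arange(len(share)), share)   # rank correlation with time
k = kurtosis(share)                                    # excess kurtosis (concentration)
iqr = np.subtract(*np.percentile(share, [75, 25]))     # interquartile range (spread)
```

A significant positive S and rho close to 1 would indicate the monotone drift toward majority dominance that the framework tests for; kurtosis and IQR then describe how tightly opinions concentrate.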


Analytical and Empirical Study of Herding Effects in Recommendation Systems

Xie, Hong, Zhong, Mingze, Lian, Defu, Wang, Zhen, Chen, Enhong

arXiv.org Artificial Intelligence

Online rating systems are widely used in web and mobile applications, e.g., Amazon and TripAdvisor, to assess the ground-truth quality of products. Due to herding effects, the aggregation of historical ratings (or historical collective opinion) can significantly influence subsequent ratings, leading to misleading and erroneous assessments. We study how to manage product ratings via rating aggregation rules and shortlisted representative reviews, for the purpose of correcting the assessment error. We first develop a mathematical model to characterize important factors of herding effects in product ratings. We then identify sufficient conditions (via stochastic approximation theory) under which the historical collective opinion converges to the ground-truth collective opinion of the whole user population. These conditions identify a class of rating aggregation rules and review selection mechanisms that can reveal the ground-truth product quality. We also quantify the speed of convergence (via martingale theory), which reflects the efficiency of rating aggregation rules and review selection mechanisms. We prove that herding effects slow down convergence, while an accurate review selection mechanism can speed it up. We also study the speed of convergence numerically and reveal trade-offs in selecting rating aggregation rules and review selection mechanisms. To show the utility of our framework, we design a maximum likelihood algorithm to infer model parameters from ratings, and conduct experiments on rating datasets from Amazon and TripAdvisor. We show that proper recency-aware rating aggregation rules can improve the speed of convergence on Amazon and TripAdvisor by 41% and 62%, respectively.
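The core mechanism can be illustrated with a simplified simulation (this is a hedged sketch, not the paper's actual model: the parameters `q`, `herd`, and `alpha` and the Gaussian noise are illustrative assumptions). Each user's reported rating is pulled toward the displayed historical aggregate; the aggregator then updates either as a plain running mean or as a recency-aware exponentially weighted average.

```python
import random

def simulate(n_users=5000, q=4.2, herd=0.6, alpha=None, seed=0):
    """Simulate herded ratings of a product with true quality q.
    alpha=None -> plain running mean; alpha in (0,1) -> recency-aware EWMA."""
    rng = random.Random(seed)
    agg, count = 3.0, 0                           # displayed aggregate, neutral prior
    for _ in range(n_users):
        private = q + rng.gauss(0, 0.8)           # user's own quality signal
        rating = (1 - herd) * private + herd * agg  # herding pulls toward history
        count += 1
        if alpha is None:
            agg += (rating - agg) / count         # running mean (step size 1/t)
        else:
            agg += alpha * (rating - agg)         # recency-aware update (fixed step)
    return agg

plain = simulate(alpha=None)
recency = simulate(alpha=0.01)
```

Because the expected rating satisfies r = (1 - herd)·q + herd·agg, the unique fixed point of both updates is the true quality q; the recency-aware rule forgets the early, herding-biased ratings faster, mirroring the paper's finding that recency-aware aggregation accelerates convergence.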


A Model to Support Collective Reasoning: Formalization, Analysis and Computational Assessment

Ganzer, Jordi (King's College London) | Criado, Natalia (King's College London) | Lopez-Sanchez, Maite (University of Barcelona) | Parsons, Simon (University of Lincoln) | Rodriguez-Aguilar, Juan A. (Institut d'Investigació en Intel·ligència Artificial (IIIA-CSIC))

Journal of Artificial Intelligence Research

In this paper we propose a new model to represent human debates and methods to obtain collective conclusions from them. This model overcomes two drawbacks of existing approaches. First, our model does not assume that participants agree on the structure of the debate. It does this by allowing participants to express their opinion about all aspects of the debate. Second, our model does not assume that participants' opinions are rational, an assumption that significantly limits current approaches. Instead, we define a weaker notion of rationality that characterises coherent opinions, and we consider different scenarios based on the coherence of individual opinions and the level of consensus. We provide a formal analysis of different opinion aggregation functions that compute a collective decision based on the individual opinions and the debate structure. In particular, we demonstrate that aggregated opinions can be coherent even if there is a lack of consensus and individual opinions are not coherent. We conclude with an empirical evaluation demonstrating that collective opinions can be computed efficiently for real-sized debates.


A model to support collective reasoning: Formalization, analysis and computational assessment

Ganzer, Jordi, Criado, Natalia, Lopez-Sanchez, Maite, Parsons, Simon, Rodriguez-Aguilar, Juan A.

arXiv.org Artificial Intelligence

Inspired by e-participation systems, in this paper we propose a new model to represent human debates and methods to obtain collective conclusions from them. This model overcomes drawbacks of existing approaches by allowing users to introduce new pieces of information into the discussion, to relate them to existing pieces, and also to express their opinion on the pieces proposed by other users. In addition, our model does not assume that users' opinions are rational in order to extract information from it, an assumption that significantly limits current approaches. Instead, we define a weaker notion of rationality that characterises coherent opinions, and we consider different scenarios based on the coherence of individual opinions and the level of consensus that users have on the debate structure. Considering these two factors, we analyse the outcomes of different opinion aggregation functions that compute a collective decision based on the individual opinions and the debate structure. In particular, we demonstrate that aggregated opinions can be coherent even if there is a lack of consensus and individual opinions are not coherent. We conclude our analysis with a computational evaluation demonstrating that collective opinions can be computed efficiently for real-sized debates.
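The central claim, that aggregated opinions can be coherent even when individual opinions are not, can be sketched with a deliberately simplified toy (the statement names, the accept/reject labels, the majority rule, and the pairwise "attack" coherence check below are all illustrative assumptions, far simpler than the paper's formal model).

```python
from collections import Counter

def aggregate(opinions):
    """Majority label per statement; 'undecided' breaks ties."""
    result = {}
    statements = {s for op in opinions for s in op}
    for s in sorted(statements):
        votes = Counter(op[s] for op in opinions if s in op)
        top = votes.most_common()
        if len(top) > 1 and top[0][1] == top[1][1]:
            result[s] = "undecided"
        else:
            result[s] = top[0][0]
    return result

def coherent(labels, attacks):
    """Weak coherence: no statement and its attacker are both accepted."""
    return not any(labels.get(a) == "accept" and labels.get(b) == "accept"
                   for a, b in attacks)

ops = [
    {"s1": "accept", "s2": "reject"},
    {"s1": "accept", "s2": "accept"},   # individually incoherent: s1 attacks s2
    {"s1": "reject", "s2": "reject"},
]
attacks = [("s1", "s2")]
agg = aggregate(ops)
```

Here the second participant holds an incoherent opinion (accepting both a statement and its attacker), yet the majority aggregate accepts s1 and rejects s2, which is coherent, a miniature instance of the phenomenon the paper analyses formally.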


[Video] The role of AI in native advertising – and how to use it effectively

#artificialintelligence

Does artificial intelligence (AI) have a place in native advertising? Can it help marketers succeed with native advertising? The Native Advertising Institute asked Dale Lovell, chief digital officer at Adyoulike, an ad tech platform that integrated IBM's Watson AI software in 2016. Below are highlights from the interview, which have been slightly edited for clarity. "I believe that AI tools, AI native will really free marketers up to be more creative. Today, marketing departments are tasked with analysing so much data and compiling reports that go to clients that sometimes don't get read, because the client is sifting through reports that they can't understand. There's just too much data in many ways for the human mind to process. So you could either hire a thousand people in your team -- which is not scalable -- or you could use an AI tool to help create insights that inform your marketing and effectively lets marketers do what they do best, which is be creative."